2025-01-07 10:00:40 · AIbase · 14.5k
OpenAI Sets a New Standard in AI Safety: Major Release of Red Team Testing Innovations
OpenAI recently showcased a more proactive red team testing strategy in the field of AI safety, surpassing its competitors, especially in the critical areas of multi-step reinforcement learning and external red team testing. Two papers released by the company establish new industry standards for improving the quality, reliability, and safety of AI models. The first paper, 'OpenAI's AI Model and System External Red Team Testing Methodology,' highlights the effectiveness of specialized external teams in identifying security vulnerabilities that internal testing may overlook. These external teams consist of cyber...

2024-08-09 09:16:52 · AIbase · 10.9k
OpenAI Rates Its Latest GPT-4o Model's Risk as 'Medium'
OpenAI recently released the GPT-4o System Card, detailing the safety measures and risk assessments conducted before the new model's launch. GPT-4o was officially launched in May, and the assessment gave it an overall risk rating of 'medium', with the main risks concentrated in cybersecurity, biological threats, persuasive capability, and model autonomy. Researchers found that while GPT-4o can be more persuasive than humans in influencing reader opinions in some cases, it did not surpass humans overall. At the time the system card was released, OpenAI faced criticism from internal employees and state senators questioning its decision-making.
